
Banning journal impact factors is bad for Dutch science

Abandoning measurable evaluation criteria will make judgements more political and more random, say Raymond Poot and Willem Mulder
August 3, 2021

Recently, Utrecht University announced that it will ban journal impact factors from its evaluation procedures for scientists. Such measurable performance indices are to be abandoned in favour of an “open science” system, which puts the collective (team) at the centre at the expense of individual scientists.

However, we are concerned that Utrecht’s new “recognition and rewards” system will lead to randomness and compromise scientific quality, which will have serious consequences for the recognition and evaluation of Dutch scientists. In particular, it will have negative consequences for young scientists, who will no longer be able to compete internationally.

Utrecht’s assertion that the journal impact factor plays a disproportionately large role in the evaluation of researchers is misleading. For a considerable number of research fields, impact factors are not that relevant. To account for field-specific cultures, the field-weighted citation impact score was developed, which compares the total citations a scientist actually receives with the average for their subject field. For example, research groups in medical imaging typically publish their results in technical journals with relatively low impact factors. The development of faster MRI methods may not be groundbreaking, but it is very important. The Dutch Research Council (NWO) takes this into account in its awarding policies. Accordingly, many personal grants have been awarded by the NWO’s career development programme to medical imaging researchers who never publish in high impact factor journals.
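To make that comparison concrete, the following is a minimal sketch (in Python) of the idea behind a field-weighted citation score: the citations a scientist's papers actually received are divided by the citations expected for comparable papers in the same field. The function name and the numbers are purely illustrative assumptions, not the official Scopus/Elsevier calculation.

```python
# Minimal sketch of a field-weighted citation score: the ratio of citations a
# scientist's papers actually received to the citations expected for comparable
# papers (same field, year and document type). An illustration of the principle
# described above, not the official Scopus/Elsevier FWCI algorithm.

def field_weighted_citation_impact(actual_citations, field_baselines):
    """Return total actual citations divided by total expected citations.

    actual_citations : citation counts, one per publication
    field_baselines  : field-average (expected) citation counts, matched
                       one-to-one with the publications above
    A score of 1.0 means the work is cited exactly as often as the field
    average; 2.0 means twice as often, 0.5 half as often.
    """
    if len(actual_citations) != len(field_baselines):
        raise ValueError("each publication needs a matching field baseline")
    return sum(actual_citations) / sum(field_baselines)

# Hypothetical medical-imaging group: modest absolute citation counts in
# low-impact-factor technical journals, yet above the field average.
actual = [12, 7, 30, 4]            # citations each paper received
expected = [8.0, 6.5, 15.0, 5.5]   # field-average citations for similar papers
print(round(field_weighted_citation_impact(actual, expected), 2))  # 1.51
```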

A second misconception is that a journal’s impact factor does not correlate with the quality of its publications. An average paper in a top journal, such as Nature, Science or Cell, requires much more work than an average paper in a technical journal. Top journals get assistance from world experts and thereby safeguard high impact and quality. This does not mean that every paper in Nature is by definition better than a publication in a technical journal, but, by and large, new technologies and concepts that overthrow dogmas are published in the top journals.


For the NWO’s “Veni, Vidi, Vici” talent programmes, the application format has changed radically over the past few years. The curriculum vitae, with objective information on publications, citations, lectures and so on, has been replaced by a “narrative”. Reviewers will no longer grade the proposal and are instead forced to fill out lists of strengths and weaknesses, irrespective of their opinion of the proposal. For some NWO competitions, CVs have been removed altogether because of the emphasis on “team science”.

The feedback from the assessment committees is disturbing. Members do not have a clue how to compare candidates, and googling their performance numbers is banned. Reviewers, often recruited from outside the Netherlands, complain about the time-consuming format and sometimes simply refuse to judge the narrative.


We believe that the NWO has a duty to allocate public funds in a way that enables the best and most talented scientists to produce new insights and innovations. We strongly support “recognition and rewards” for academics who are not exclusively science-oriented, but consider this the responsibility of the universities, not of the NWO. University HR policies must offer different career tracks for academics who excel in non-science competencies.

Quantitatively analysing a problem is an important feature of scientific practice, particularly in the medical, life and exact sciences. In these disciplines, researchers worldwide seek creative solutions to shared problems, so scientific success is more easily measured and compared. For more qualitative disciplines, it is understandable that other ways of assessing success are used. We strongly support diverse ways to evaluate different science disciplines and suggest that fields themselves determine how scientists in their discipline are assessed.

Utrecht’s policy puts a strong emphasis on open science, level of public engagement, public accessibility of data, composition of the research team and demonstrated leadership. These criteria are not scientific: they are political. Moreover, it is extremely difficult to measure them, let alone use them to conduct a fair comparison of different scientists. They should therefore not be the dominant criteria in the assessment of scientists. In particular, for the research track of the medical, life and exact sciences, internationally recognised and measurable criteria must be paramount.

The US, the world’s science powerhouse, is on a completely different trajectory from the Netherlands. Big public funders such as the National Institutes of Health (NIH) and the National Science Foundation (NSF) focus solely on scientific excellence and have not signed the Declaration on Research Assessment (DORA; also known as the San Francisco Declaration), which abolishes the impact factor as an evaluation index.


We believe that the NWO and the Dutch universities should maintain objective and measurable standards for academics who primarily focus on research. We prefer scientists who are optimised for generating the best science, not for writing the prettiest narrative. This will be the best way both to benefit society and to safeguard the Netherlands’ favourable position in international rankings.

Raymond Poot is an associate professor (UHD) at the Erasmus University Medical Center, Rotterdam. Willem Mulder is professor of precision medicine at the Radboud University Medical Center and the Eindhoven University of Technology. This is a translated and edited version of an article that appeared in the Dutch journal Science Guide, which was signed by another 172 academics.

POSTSCRIPT:

Print headline: Journal impact factor ban is bad for Dutch science

Reader's comments (6)
I hope you also publish the excellent reply https://www.scienceguide.nl/2021/07/we-moeten-af-van-telzucht-in-de-wetenschap/ by Annemijn Algra et al.
I found this article rather confusing, as it contains a lot of opinion but precious little evidence. It also starts with an illogical argument, saying "Utrecht's assertion that the journal impact factor plays a disproportionately large role in the evaluation of researchers is misleading. For a considerable number of research fields, impact factors are not that relevant." If that is so, why are the authors so worried about dropping the impact factor? They then go on to contrast the quality of papers in Cell/Nature/Science with that of papers in "technical journals" - I'm not sure what is meant by that - there are plenty of journals that fit neither category that report excellent work. Reviewing standards are at least as high in many society journals, which have editors who are familiar with the specific topic of a paper - more so than the journalist editors of the 'top' journals. The pressure to get publications in CNS is known to create perverse incentives (https://royalsocietypublishing.org/doi/10.1098/rsos.171511), and these journals have a relatively high rate of retractions: https://www.nature.com/articles/nature.2014.15951. It's also interesting that they see open science as a political rather than scientific matter. I could not disagree more: it's frustrating to read these 'high impact' papers in 'top' journals that make extraordinary claims but then just say 'data available on request' (it never is). If we cannot see the data and have a clear account of methods, then the research paper remains more like an advertisement than a scientific contribution. Finally, the authors' concern for early-career researchers is laudable, but have they surveyed early-career researchers to ask what they think about the new criteria?
And here is a response from some Dutch scientists who take a different perspective from the authors (including some early-career researchers). https://recognitionrewards.nl/2021/08/03/why-the-new-recognition-rewards-actually-boosts-excellent-science/
Sigh. I suppose that there will always be some dinosaurs who oppose progress. The journal impact factor has been totally debunked for decades now. None of that work is referred to. I'll cite one example from my own experience. In 1981 we published a preliminary account of results that we'd obtained with the (then new) method for recording the activity of single ion channels. It was brief and crude, but in the early 80's anything with single channels in it sailed into Nature. After four more years of work we published a much better account of the work, 57 printed pages in the Journal of Physiology. The idea that the short note is worth more than the real paper is beyond absurd. How about reading the applicant's (self-nominated) three best papers? It doesn't matter a damn where they are published (or even whether they're published yet).
If the venue does not matter then every university should have an inhouse journal and academics should publish in those. Why bother with other journals in the first place?
So Poot and Mulder, both from Dutch medical centres, want to retain academic evaluation by journal impact factor. Their logic is hard to follow and harder to swallow: top journals have high impact factors and publish the best papers because they get assistance from world experts who safeguard high impact and quality. What does this mean? In many research fields, they say, journal impact factors are not nearly as significant as they are in medicine. Oh really? Academic performance is measured almost entirely by metrics these days and by far the most important of these is the journal impact factor. As David Colquhoun says, this is not because the journal impact factor is a good measure, but because it has long been gamed, and the manipulation has been most successful in medicine. For years, the editors of the Lancet and BMJ have bewailed the corruption that produces the high impact factors boasted by the top journals in medicine. Papers are written to order and to a formula that will generate citations and thereby contribute most to the journal impact factor. Dozens typically claim authorship of a paper in a medical journal; equally typically, none of them wrote it. As citations grow older and ever more positive, the research base becomes shakier. Some of the most prolific authors have never actually existed, nor have the papers they have written, or the journals in which they have published. The system is absurd and has long been recognized as quite daft, but a lot of capital has been sunk into working this system, and probably in no discipline more than medicine.